Cloud and Cache-Aided Wireless Networks: Fundamental Latency Trade-Offs

Authors

  • Avik Sengupta
  • Ravi Tandon
  • Osvaldo Simeone

Abstract

A cloud and cache-aided wireless network architecture is studied in which edge nodes (ENs), such as base stations, are connected to a cloud processor via dedicated fronthaul links, while also being endowed with caches. Cloud processing enables the centralized implementation of cooperative transmission strategies at the ENs, albeit at the cost of an increased latency due to fronthaul transfer. In contrast, the proactive caching of popular content at the ENs allows for the low-latency delivery of the cached files, but with generally limited opportunities for cooperative transmission among the ENs. The interplay between cloud processing and edge caching is addressed from an information-theoretic viewpoint by investigating the fundamental limits of a high signal-to-noise ratio (SNR) metric, termed normalized delivery time (NDT), which captures the worst-case latency for delivering any requested content to the users. The NDT is defined under the assumption of either serial or pipelined fronthaul-edge transmissions, and is studied as a function of fronthaul and cache capacity constraints. Transmission policies that encompass the caching phase as well as the transmission phase across both the fronthaul and wireless, or edge, segments are proposed, with the aim of minimizing the NDT for given fronthaul and cache capacity constraints. Information-theoretic lower bounds on the NDT are also derived. Achievability arguments and lower bounds are leveraged to characterize the minimal NDT in a number of important special cases, including systems with no caching capability, as well as to prove that the proposed schemes achieve optimality within a constant multiplicative factor of 2 for all values of the problem parameters.
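
As a point of reference, the NDT is usually formalized as a high-SNR limit. A sketch under assumed notation (not quoted from the paper): let mu be the fractional cache size at each EN, r the fronthaul rate in the scaling C_F = r log P, P the SNR, L the file size in bits, and T the worst-case delivery time. Then

    \delta(\mu, r) = \lim_{P \to \infty} \lim_{L \to \infty} \frac{T(\mu, r, P)}{L / \log P}

Since L / log P is the high-SNR time needed to deliver one file over an interference-free point-to-point link, the NDT measures latency as a multiplicative overhead relative to this ideal reference; under serial fronthaul-edge transmission it decomposes as \delta = \delta_F + \delta_E, the sum of fronthaul and edge contributions.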

Similar resources

Fundamental Limits of Cloud and Cache-Aided Interference Management with Multi-Antenna Base Stations

In fog-aided cellular systems, content delivery latency can be minimized by jointly optimizing edge caching and transmission strategies. In order to account for the cache capacity limitations at the Edge Nodes (ENs), transmission generally involves both fronthaul transfer from a cloud processor with access to the content library to the ENs and wireless delivery from the ENs to the users...

Storage-Latency Trade-off in Cache-Aided Fog Radio Access Networks

A fog radio access network (F-RAN) is studied, in which K_T edge nodes (ENs), connected to a cloud server via orthogonal fronthaul links, serve K_R users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static b...

Fundamental Limits on Delivery Time in Cloud- and Cache-Aided Heterogeneous Networks

A fog radio access network (F-RAN) is considered as a network architecture candidate to meet the soaring demands in terms of reliability, spectral efficiency, and latency in next-generation wireless networks. This architecture combines the benefits associated with centralized cloud processing and wireless edge caching, primarily enabling low-latency transmission under moderate fronthaul capacity ...

On-Demand Delivery of Software in Mobile Environments

In this paper we describe ACHILLES, a system for on-demand delivery of software from stationary servers to mobile clients over wireless network links. ACHILLES supports disconnected operation and employs mechanisms for hiding the latency of slow networks. Its main features are a cache replacement policy that uses a simple cost model to determine which software components should be removed from the...
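
The snippet above is cut off before ACHILLES's cost model is stated. Purely to illustrate the general shape of a cost-driven cache replacement policy, here is a minimal C sketch; the cost function and every identifier in it are hypothetical, not taken from ACHILLES:

    #include <stddef.h>

    /* Hypothetical cached software component; the cost model below is an
     * illustrative placeholder, NOT the one ACHILLES actually uses. */
    typedef struct {
        size_t size_bytes;  /* space the component occupies in the cache   */
        double fetch_cost;  /* estimated time to re-fetch over a slow link */
        double access_freq; /* recent access frequency                     */
    } component_t;

    /* One plausible retention value: components that are costly to re-fetch
     * and frequently used are worth their space; evict the lowest value. */
    static double retention_value(const component_t *c)
    {
        return (c->fetch_cost * c->access_freq) / (double)c->size_bytes;
    }

    /* Pick the eviction victim among n cached components. */
    size_t pick_victim(const component_t *cache, size_t n)
    {
        size_t victim = 0;
        for (size_t i = 1; i < n; i++)
            if (retention_value(&cache[i]) < retention_value(&cache[victim]))
                victim = i;
        return victim;
    }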

Fundamental Latency Trade-offs in Architecting DRAM Caches: Outperforming Impractical SRAM-Tags with a Simple and Practical Design

This paper analyzes the design trade-offs in architecting large-scale DRAM caches. Prior research, including the recent work from Loh and Hill, has organized DRAM caches similarly to conventional caches. In this paper, we contend that some of the basic design decisions typically made for conventional caches (such as the serialization of tag and data access, large associativity, and the update of replace...
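
For context on the "serialization of tag and data access" questioned above, the following minimal C sketch contrasts the two organizations; all identifiers are hypothetical and the DRAM helpers are stand-in stubs, illustrating the general idea rather than the paper's actual design:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    enum { LINE = 64, SETS = 1024 };
    typedef struct { uint64_t tag; uint8_t data[LINE]; } tad_t; /* tag-and-data unit */

    static uint64_t tag_array[SETS];          /* stand-ins for DRAM rows */
    static uint8_t  data_array[SETS][LINE];

    /* Each helper models exactly ONE DRAM access. */
    static uint64_t dram_read_tag(uint64_t set) { return tag_array[set % SETS]; }
    static void dram_read_data(uint64_t set, uint8_t out[LINE]) {
        memcpy(out, data_array[set % SETS], LINE);
    }
    static tad_t dram_read_tad(uint64_t set) {
        tad_t t = { tag_array[set % SETS] };
        memcpy(t.data, data_array[set % SETS], LINE);
        return t;
    }

    /* Conventional organization: two dependent DRAM accesses per hit. */
    bool lookup_serialized(uint64_t set, uint64_t tag, uint8_t out[LINE]) {
        if (dram_read_tag(set) != tag)   /* access 1: read tag, compare */
            return false;
        dram_read_data(set, out);        /* access 2: read data         */
        return true;
    }

    /* Combined organization: one access fetches tag and data together,
     * removing the serialization (the data is wasted on a miss). */
    bool lookup_combined(uint64_t set, uint64_t tag, uint8_t out[LINE]) {
        tad_t t = dram_read_tad(set);    /* single DRAM access */
        if (t.tag != tag)
            return false;
        memcpy(out, t.data, LINE);
        return true;
    }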

Journal:
  • CoRR

Volume: abs/1605.01690  Issue: -

Pages: -

Publication date: 2016